2025-06-18 03:09:52,990 INFO mapreduce.MiniHadoopClusterManager: Updated 0 configuration settings from command line.
2025-06-18 03:09:53,033 INFO hdfs.MiniDFSCluster: starting cluster: numNameNodes=1, numDataNodes=1
2025-06-18 03:09:53,353 INFO namenode.NameNode: Formatting using clusterid: testClusterID
2025-06-18 03:09:53,364 INFO namenode.FSEditLog: Edit logging is async:true
2025-06-18 03:09:53,381 INFO namenode.FSNamesystem: KeyProvider: null
2025-06-18 03:09:53,382 INFO namenode.FSNamesystem: fsLock is fair: true
2025-06-18 03:09:53,382 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2025-06-18 03:09:53,403 INFO namenode.FSNamesystem: fsOwner = clickhouse (auth:SIMPLE)
2025-06-18 03:09:53,403 INFO namenode.FSNamesystem: supergroup = supergroup
2025-06-18 03:09:53,403 INFO namenode.FSNamesystem: isPermissionEnabled = true
2025-06-18 03:09:53,403 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2025-06-18 03:09:53,404 INFO namenode.FSNamesystem: HA Enabled: false
2025-06-18 03:09:53,431 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2025-06-18 03:09:53,433 INFO Configuration.deprecation: hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
2025-06-18 03:09:53,433 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2025-06-18 03:09:53,433 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2025-06-18 03:09:53,436 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2025-06-18 03:09:53,436 INFO blockmanagement.BlockManager: The block deletion will start around 2025 Jun 18 03:09:53
2025-06-18 03:09:53,437 INFO util.GSet: Computing capacity for map BlocksMap
2025-06-18 03:09:53,437 INFO util.GSet: VM type = 64-bit
2025-06-18 03:09:53,438 INFO util.GSet: 2.0% max memory 7.7 GB = 156.7 MB
2025-06-18 03:09:53,438 INFO util.GSet: capacity = 2^24 = 16777216 entries
2025-06-18 03:09:53,462 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2025-06-18 03:09:53,462 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2025-06-18 03:09:53,466 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2025-06-18 03:09:53,466 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2025-06-18 03:09:53,466 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 0
2025-06-18 03:09:53,467 INFO blockmanagement.BlockManager: defaultReplication = 1
2025-06-18 03:09:53,467 INFO blockmanagement.BlockManager: maxReplication = 512
2025-06-18 03:09:53,467 INFO blockmanagement.BlockManager: minReplication = 1
2025-06-18 03:09:53,467 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
2025-06-18 03:09:53,467 INFO blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
2025-06-18 03:09:53,467 INFO blockmanagement.BlockManager: encryptDataTransfer = false
2025-06-18 03:09:53,467 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2025-06-18 03:09:53,482 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2025-06-18 03:09:53,482 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2025-06-18 03:09:53,482 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2025-06-18 03:09:53,482 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2025-06-18 03:09:53,494 INFO util.GSet: Computing capacity for map INodeMap
2025-06-18 03:09:53,494 INFO util.GSet: VM type = 64-bit
2025-06-18 03:09:53,496 INFO util.GSet: 1.0% max memory 7.7 GB = 78.3 MB
2025-06-18 03:09:53,496 INFO util.GSet: capacity = 2^23 = 8388608 entries
2025-06-18 03:09:53,505 INFO namenode.FSDirectory: ACLs enabled? true
2025-06-18 03:09:53,505 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2025-06-18 03:09:53,505 INFO namenode.FSDirectory: XAttrs enabled? true
2025-06-18 03:09:53,505 INFO namenode.NameNode: Caching file names occurring more than 10 times
2025-06-18 03:09:53,509 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2025-06-18 03:09:53,510 INFO snapshot.SnapshotManager: SkipList is disabled
2025-06-18 03:09:53,513 INFO util.GSet: Computing capacity for map cachedBlocks
2025-06-18 03:09:53,513 INFO util.GSet: VM type = 64-bit
2025-06-18 03:09:53,513 INFO util.GSet: 0.25% max memory 7.7 GB = 19.6 MB
2025-06-18 03:09:53,513 INFO util.GSet: capacity = 2^21 = 2097152 entries
2025-06-18 03:09:53,520 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2025-06-18 03:09:53,520 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2025-06-18 03:09:53,520 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2025-06-18 03:09:53,523 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2025-06-18 03:09:53,523 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2025-06-18 03:09:53,524 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2025-06-18 03:09:53,524 INFO util.GSet: VM type = 64-bit
2025-06-18 03:09:53,524 INFO util.GSet: 0.029999999329447746% max memory 7.7 GB = 2.4 MB
2025-06-18 03:09:53,524 INFO util.GSet: capacity = 2^18 = 262144 entries
2025-06-18 03:09:53,541 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1824230658-172.17.0.2-1750226993535
2025-06-18 03:09:53,551 INFO common.Storage: Storage directory /hadoop-3.3.1/target/test/data/dfs/name-0-1 has been successfully formatted.
2025-06-18 03:09:53,555 INFO common.Storage: Storage directory /hadoop-3.3.1/target/test/data/dfs/name-0-2 has been successfully formatted.
2025-06-18 03:09:53,588 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop-3.3.1/target/test/data/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
2025-06-18 03:09:53,594 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop-3.3.1/target/test/data/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
2025-06-18 03:09:53,690 INFO namenode.FSImageFormatProtobuf: Image file /hadoop-3.3.1/target/test/data/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 405 bytes saved in 0 seconds .
2025-06-18 03:09:53,690 INFO namenode.FSImageFormatProtobuf: Image file /hadoop-3.3.1/target/test/data/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 405 bytes saved in 0 seconds .
2025-06-18 03:09:53,702 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2025-06-18 03:09:53,743 INFO namenode.FSNamesystem: Stopping services started for active state
2025-06-18 03:09:53,744 INFO namenode.FSNamesystem: Stopping services started for standby state
2025-06-18 03:09:53,745 INFO namenode.NameNode: createNameNode []
2025-06-18 03:09:53,783 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2025-06-18 03:09:53,846 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2025-06-18 03:09:53,846 INFO impl.MetricsSystemImpl: NameNode metrics system started
2025-06-18 03:09:53,860 INFO namenode.NameNodeUtils: fs.defaultFS is hdfs://127.0.0.1:12222
2025-06-18 03:09:53,866 INFO namenode.NameNode: Clients should use 127.0.0.1:12222 to access this namenode/service.
2025-06-18 03:09:53,899 INFO util.JvmPauseMonitor: Starting JVM pause monitor
2025-06-18 03:09:53,906 INFO hdfs.DFSUtil: Filter initializers set : org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.hdfs.web.AuthFilterInitializer
2025-06-18 03:09:53,908 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://localhost:0
2025-06-18 03:09:53,916 INFO util.log: Logging initialized @1321ms to org.eclipse.jetty.util.log.Slf4jLog
2025-06-18 03:09:53,966 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2025-06-18 03:09:53,969 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2025-06-18 03:09:53,973 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2025-06-18 03:09:53,974 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2025-06-18 03:09:53,974 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2025-06-18 03:09:53,974 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2025-06-18 03:09:53,975 INFO http.HttpServer2: Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context hdfs
2025-06-18 03:09:53,976 INFO http.HttpServer2: Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context logs
2025-06-18 03:09:53,976 INFO http.HttpServer2: Added filter AuthFilter (class=org.apache.hadoop.hdfs.web.AuthFilter) to context static
2025-06-18 03:09:53,993 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2025-06-18 03:09:53,998 INFO http.HttpServer2: Jetty bound to port 36641
2025-06-18 03:09:53,998 INFO server.Server: jetty-9.4.40.v20210413; built: 2021-04-13T20:42:42.668Z; git: b881a572662e1943a14ae12e7e1207989f218b74; jvm 11.0.25+9-post-Ubuntu-1ubuntu122.04
2025-06-18 03:09:54,014 INFO server.session: DefaultSessionIdManager workerName=node0
2025-06-18 03:09:54,014 INFO server.session: No SessionScavenger set, using defaults
2025-06-18 03:09:54,015 INFO server.session: node0 Scavenging every 600000ms
2025-06-18 03:09:54,024 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2025-06-18 03:09:54,026 INFO handler.ContextHandler: Started o.e.j.s.ServletContextHandler@76f10035{logs,/logs,file:///hadoop-3.3.1/logs,AVAILABLE}
2025-06-18 03:09:54,026 INFO handler.ContextHandler: Started o.e.j.s.ServletContextHandler@17f460bb{static,/static,file:///hadoop-3.3.1/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2025-06-18 03:09:54,081 INFO handler.ContextHandler: Started o.e.j.w.WebAppContext@56de6d6b{hdfs,/,file:///hadoop-3.3.1/share/hadoop/hdfs/webapps/hdfs/,AVAILABLE}{file:/hadoop-3.3.1/share/hadoop/hdfs/webapps/hdfs}
2025-06-18 03:09:54,091 INFO server.AbstractConnector: Started ServerConnector@55f45b92{HTTP/1.1, (http/1.1)}{localhost:36641}
2025-06-18 03:09:54,091 INFO server.Server: Started @1496ms
2025-06-18 03:09:54,095 INFO namenode.FSEditLog: Edit logging is async:true
2025-06-18 03:09:54,105 INFO namenode.FSNamesystem: KeyProvider: null
2025-06-18 03:09:54,105 INFO namenode.FSNamesystem: fsLock is fair: true
2025-06-18 03:09:54,105 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2025-06-18 03:09:54,105 INFO namenode.FSNamesystem: fsOwner = clickhouse (auth:SIMPLE)
2025-06-18 03:09:54,105 INFO namenode.FSNamesystem: supergroup = supergroup
2025-06-18 03:09:54,105 INFO namenode.FSNamesystem: isPermissionEnabled = true
2025-06-18 03:09:54,105 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2025-06-18 03:09:54,105 INFO namenode.FSNamesystem: HA Enabled: false
2025-06-18 03:09:54,106 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2025-06-18 03:09:54,106 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2025-06-18 03:09:54,106 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2025-06-18 03:09:54,106 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2025-06-18 03:09:54,106 INFO blockmanagement.BlockManager: The block deletion will start around 2025 Jun 18 03:09:54
2025-06-18 03:09:54,106 INFO util.GSet: Computing capacity for map BlocksMap
2025-06-18 03:09:54,106 INFO util.GSet: VM type = 64-bit
2025-06-18 03:09:54,106 INFO util.GSet: 2.0% max memory 7.7 GB = 156.7 MB
2025-06-18 03:09:54,106 INFO util.GSet: capacity = 2^24 = 16777216 entries
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 0
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManager: defaultReplication = 1
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManager: maxReplication = 512
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManager: minReplication = 1
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManager: encryptDataTransfer = false
2025-06-18 03:09:54,111 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2025-06-18 03:09:54,111 INFO util.GSet: Computing capacity for map INodeMap
2025-06-18 03:09:54,111 INFO util.GSet: VM type = 64-bit
2025-06-18 03:09:54,111 INFO util.GSet: 1.0% max memory 7.7 GB = 78.3 MB
2025-06-18 03:09:54,111 INFO util.GSet: capacity = 2^23 = 8388608 entries
2025-06-18 03:09:54,121 INFO namenode.FSDirectory: ACLs enabled? true
2025-06-18 03:09:54,121 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2025-06-18 03:09:54,121 INFO namenode.FSDirectory: XAttrs enabled? true
2025-06-18 03:09:54,121 INFO namenode.NameNode: Caching file names occurring more than 10 times
2025-06-18 03:09:54,121 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2025-06-18 03:09:54,121 INFO snapshot.SnapshotManager: SkipList is disabled
2025-06-18 03:09:54,121 INFO util.GSet: Computing capacity for map cachedBlocks
2025-06-18 03:09:54,121 INFO util.GSet: VM type = 64-bit
2025-06-18 03:09:54,121 INFO util.GSet: 0.25% max memory 7.7 GB = 19.6 MB
2025-06-18 03:09:54,122 INFO util.GSet: capacity = 2^21 = 2097152 entries
2025-06-18 03:09:54,124 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2025-06-18 03:09:54,124 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2025-06-18 03:09:54,124 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2025-06-18 03:09:54,124 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2025-06-18 03:09:54,124 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2025-06-18 03:09:54,124 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2025-06-18 03:09:54,124 INFO util.GSet: VM type = 64-bit
2025-06-18 03:09:54,125 INFO util.GSet: 0.029999999329447746% max memory 7.7 GB = 2.4 MB
2025-06-18 03:09:54,125 INFO util.GSet: capacity = 2^18 = 262144 entries
2025-06-18 03:09:54,133 INFO common.Storage: Lock on /hadoop-3.3.1/target/test/data/dfs/name-0-1/in_use.lock acquired by nodename 430@56f8ad578229
2025-06-18 03:09:54,136 INFO common.Storage: Lock on /hadoop-3.3.1/target/test/data/dfs/name-0-2/in_use.lock acquired by nodename 430@56f8ad578229
2025-06-18 03:09:54,137 INFO namenode.FileJournalManager: Recovering unfinalized segments in /hadoop-3.3.1/target/test/data/dfs/name-0-1/current
2025-06-18 03:09:54,137 INFO namenode.FileJournalManager: Recovering unfinalized segments in /hadoop-3.3.1/target/test/data/dfs/name-0-2/current
2025-06-18 03:09:54,137 INFO namenode.FSImage: No edit log streams selected.
2025-06-18 03:09:54,137 INFO namenode.FSImage: Planning to load image: FSImageFile(file=/hadoop-3.3.1/target/test/data/dfs/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2025-06-18 03:09:54,152 INFO namenode.FSImageFormatPBINode: Loading 1 INodes.
2025-06-18 03:09:54,153 INFO namenode.FSImageFormatPBINode: Successfully loaded 1 inodes
2025-06-18 03:09:54,156 INFO namenode.FSImageFormatPBINode: Completed update blocks map and name cache, total waiting duration 0ms.
2025-06-18 03:09:54,157 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2025-06-18 03:09:54,157 INFO namenode.FSImage: Loaded image for txid 0 from /hadoop-3.3.1/target/test/data/dfs/name-0-1/current/fsimage_0000000000000000000
2025-06-18 03:09:54,159 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2025-06-18 03:09:54,160 INFO namenode.FSEditLog: Starting log segment at 1
2025-06-18 03:09:54,173 INFO namenode.NameCache: initialized with 0 entries 0 lookups
2025-06-18 03:09:54,173 INFO namenode.FSNamesystem: Finished loading FSImage in 47 msecs
2025-06-18 03:09:54,266 INFO namenode.NameNode: RPC server is binding to localhost:12222
2025-06-18 03:09:54,266 INFO namenode.NameNode: Enable NameNode state context:false
2025-06-18 03:09:54,271 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2025-06-18 03:09:54,278 INFO ipc.Server: Starting Socket Reader #1 for port 12222
2025-06-18 03:09:54,410 INFO namenode.FSNamesystem: Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
2025-06-18 03:09:54,419 INFO namenode.LeaseManager: Number of blocks under construction: 0
2025-06-18 03:09:54,423 INFO blockmanagement.DatanodeAdminDefaultMonitor: Initialized the Default Decommission and Maintenance monitor
2025-06-18 03:09:54,425 INFO blockmanagement.BlockManager: initializing replication queues
2025-06-18 03:09:54,425 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2025-06-18 03:09:54,425 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2025-06-18 03:09:54,425 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2025-06-18 03:09:54,437 INFO blockmanagement.BlockManager: Total number of blocks = 0
2025-06-18 03:09:54,437 INFO blockmanagement.BlockManager: Number of invalid blocks = 0
2025-06-18 03:09:54,437 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 0
2025-06-18 03:09:54,437 INFO blockmanagement.BlockManager: Number of over-replicated blocks = 0
2025-06-18 03:09:54,437 INFO blockmanagement.BlockManager: Number of blocks being written = 0
2025-06-18 03:09:54,437 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 12 msec
2025-06-18 03:09:54,447 INFO ipc.Server: IPC Server Responder: starting
2025-06-18 03:09:54,448 INFO ipc.Server: IPC Server listener on 12222: starting
2025-06-18 03:09:54,453 INFO namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:12222
2025-06-18 03:09:54,456 INFO namenode.FSNamesystem: Starting services required for active state
2025-06-18 03:09:54,457 INFO namenode.FSDirectory: Initializing quota with 12 thread(s)
2025-06-18 03:09:54,462 INFO namenode.FSDirectory: Quota initialization completed in 5 milliseconds name space=1 storage space=0 storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0, PROVIDED=0
2025-06-18 03:09:54,468 INFO blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2025-06-18 03:09:54,478 INFO hdfs.MiniDFSCluster: Starting DataNode 0 with dfs.datanode.data.dir: [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data1,[DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data2
2025-06-18 03:09:54,525 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data1
2025-06-18 03:09:54,531 INFO checker.ThrottledAsyncChecker: Scheduling a check for [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data2
2025-06-18 03:09:54,562 INFO impl.MetricsSystemImpl: DataNode metrics system started (again)
2025-06-18 03:09:54,567 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2025-06-18 03:09:54,569 INFO datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2025-06-18 03:09:54,573 INFO datanode.DataNode: Configured hostname is 127.0.0.1
2025-06-18 03:09:54,574 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2025-06-18 03:09:54,577 INFO datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2025-06-18 03:09:54,583 INFO datanode.DataNode: Opened streaming server at /127.0.0.1:35563
2025-06-18 03:09:54,585 INFO datanode.DataNode: Balancing bandwidth is 104857600 bytes/s
2025-06-18 03:09:54,585 INFO datanode.DataNode: Number threads for balancing is 100
2025-06-18 03:09:54,609 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2025-06-18 03:09:54,610 INFO http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2025-06-18 03:09:54,613 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2025-06-18 03:09:54,615 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2025-06-18 03:09:54,615 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2025-06-18 03:09:54,615 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2025-06-18 03:09:54,619 INFO http.HttpServer2: Jetty bound to port 33877
2025-06-18 03:09:54,619 INFO server.Server: jetty-9.4.40.v20210413; built: 2021-04-13T20:42:42.668Z; git: b881a572662e1943a14ae12e7e1207989f218b74; jvm 11.0.25+9-post-Ubuntu-1ubuntu122.04
2025-06-18 03:09:54,620 INFO server.session: DefaultSessionIdManager workerName=node0
2025-06-18 03:09:54,621 INFO server.session: No SessionScavenger set, using defaults
2025-06-18 03:09:54,621 INFO server.session: node0 Scavenging every 600000ms
2025-06-18 03:09:54,625 INFO handler.ContextHandler: Started o.e.j.s.ServletContextHandler@2687725a{logs,/logs,file:///hadoop-3.3.1/logs,AVAILABLE}
2025-06-18 03:09:54,626 INFO handler.ContextHandler: Started o.e.j.s.ServletContextHandler@2c05ff9d{static,/static,file:///hadoop-3.3.1/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2025-06-18 03:09:54,636 INFO handler.ContextHandler: Started o.e.j.w.WebAppContext@78b612c6{datanode,/,file:///hadoop-3.3.1/share/hadoop/hdfs/webapps/datanode/,AVAILABLE}{file:/hadoop-3.3.1/share/hadoop/hdfs/webapps/datanode}
2025-06-18 03:09:54,638 INFO server.AbstractConnector: Started ServerConnector@3f50b680{HTTP/1.1, (http/1.1)}{localhost:33877}
2025-06-18 03:09:54,638 INFO server.Server: Started @2043ms
2025-06-18 03:09:54,719 WARN web.DatanodeHttpServer: Got null for restCsrfPreventionFilter - will not do any filtering.
2025-06-18 03:09:54,767 INFO web.DatanodeHttpServer: Listening HTTP traffic on /127.0.0.1:37863
2025-06-18 03:09:54,768 INFO util.JvmPauseMonitor: Starting JVM pause monitor
2025-06-18 03:09:54,769 INFO datanode.DataNode: dnUserName = clickhouse
2025-06-18 03:09:54,769 INFO datanode.DataNode: supergroup = supergroup
2025-06-18 03:09:54,778 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2025-06-18 03:09:54,778 INFO ipc.Server: Starting Socket Reader #1 for port 0
2025-06-18 03:09:54,783 INFO datanode.DataNode: Opened IPC server at /127.0.0.1:34819
2025-06-18 03:09:54,799 INFO datanode.DataNode: Refresh request received for nameservices: null
2025-06-18 03:09:54,800 INFO datanode.DataNode: Starting BPOfferServices for nameservices:
2025-06-18 03:09:54,806 INFO datanode.DataNode: Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:12222 starting to offer service
2025-06-18 03:09:54,811 INFO ipc.Server: IPC Server Responder: starting
2025-06-18 03:09:54,812 INFO ipc.Server: IPC Server listener on 0: starting
2025-06-18 03:09:55,002 INFO datanode.DataNode: Acknowledging ACTIVE Namenode during handshakeBlock pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:12222
2025-06-18 03:09:55,005 INFO common.Storage: Using 2 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=2, dataDirs=2)
2025-06-18 03:09:55,009 INFO common.Storage: Lock on /hadoop-3.3.1/target/test/data/dfs/data/data1/in_use.lock acquired by nodename 430@56f8ad578229
2025-06-18 03:09:55,010 INFO common.Storage: Storage directory with location [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data1 is not formatted for namespace 2133025505. Formatting...
2025-06-18 03:09:55,011 INFO common.Storage: Generated new storageID DS-646bdffe-07a9-484d-b454-006de0117db0 for directory /hadoop-3.3.1/target/test/data/dfs/data/data1
2025-06-18 03:09:55,018 INFO common.Storage: Lock on /hadoop-3.3.1/target/test/data/dfs/data/data2/in_use.lock acquired by nodename 430@56f8ad578229
2025-06-18 03:09:55,018 INFO common.Storage: Storage directory with location [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data2 is not formatted for namespace 2133025505. Formatting...
2025-06-18 03:09:55,019 INFO common.Storage: Generated new storageID DS-e066f37b-b531-4ee0-bee5-fcd49b2a1988 for directory /hadoop-3.3.1/target/test/data/dfs/data/data2
2025-06-18 03:09:55,048 INFO common.Storage: Analyzing storage directories for bpid BP-1824230658-172.17.0.2-1750226993535
2025-06-18 03:09:55,048 INFO common.Storage: Locking is disabled for /hadoop-3.3.1/target/test/data/dfs/data/data1/current/BP-1824230658-172.17.0.2-1750226993535
2025-06-18 03:09:55,048 INFO common.Storage: Block pool storage directory for location [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data1 and block pool id BP-1824230658-172.17.0.2-1750226993535 is not formatted. Formatting ...
2025-06-18 03:09:55,048 INFO common.Storage: Formatting block pool BP-1824230658-172.17.0.2-1750226993535 directory /hadoop-3.3.1/target/test/data/dfs/data/data1/current/BP-1824230658-172.17.0.2-1750226993535/current
2025-06-18 03:09:55,064 INFO common.Storage: Analyzing storage directories for bpid BP-1824230658-172.17.0.2-1750226993535
2025-06-18 03:09:55,065 INFO common.Storage: Locking is disabled for /hadoop-3.3.1/target/test/data/dfs/data/data2/current/BP-1824230658-172.17.0.2-1750226993535
2025-06-18 03:09:55,065 INFO common.Storage: Block pool storage directory for location [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data2 and block pool id BP-1824230658-172.17.0.2-1750226993535 is not formatted. Formatting ...
2025-06-18 03:09:55,065 INFO common.Storage: Formatting block pool BP-1824230658-172.17.0.2-1750226993535 directory /hadoop-3.3.1/target/test/data/dfs/data/data2/current/BP-1824230658-172.17.0.2-1750226993535/current
2025-06-18 03:09:55,068 INFO datanode.DataNode: Setting up storage: nsid=2133025505;bpid=BP-1824230658-172.17.0.2-1750226993535;lv=-57;nsInfo=lv=-66;cid=testClusterID;nsid=2133025505;c=1750226993535;bpid=BP-1824230658-172.17.0.2-1750226993535;dnuuid=null
2025-06-18 03:09:55,071 INFO datanode.DataNode: Generated and persisted new Datanode UUID cd7a2e6c-0178-49c9-846a-e8b940ae9691
2025-06-18 03:09:55,086 INFO impl.FsDatasetImpl: The datanode lock is a read write lock
2025-06-18 03:09:55,094 INFO hdfs.MiniDFSCluster: dnInfo.length != numDataNodes
2025-06-18 03:09:55,094 INFO hdfs.MiniDFSCluster: Waiting for cluster to become active
2025-06-18 03:09:55,142 INFO impl.FsDatasetImpl: Added new volume: DS-646bdffe-07a9-484d-b454-006de0117db0
2025-06-18 03:09:55,142 INFO impl.FsDatasetImpl: Added volume - [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data1, StorageType: DISK
2025-06-18 03:09:55,144 INFO impl.FsDatasetImpl: Added new volume: DS-e066f37b-b531-4ee0-bee5-fcd49b2a1988
2025-06-18 03:09:55,144 INFO impl.FsDatasetImpl: Added volume - [DISK]file:/hadoop-3.3.1/target/test/data/dfs/data/data2, StorageType: DISK
2025-06-18 03:09:55,147 INFO impl.MemoryMappableBlockLoader: Initializing cache loader: MemoryMappableBlockLoader.
2025-06-18 03:09:55,149 INFO impl.FsDatasetImpl: Registered FSDatasetState MBean
2025-06-18 03:09:55,151 INFO impl.FsDatasetImpl: Adding block pool BP-1824230658-172.17.0.2-1750226993535
2025-06-18 03:09:55,152 INFO impl.FsDatasetImpl: Scanning block pool BP-1824230658-172.17.0.2-1750226993535 on volume /hadoop-3.3.1/target/test/data/dfs/data/data1...
2025-06-18 03:09:55,153 INFO impl.FsDatasetImpl: Scanning block pool BP-1824230658-172.17.0.2-1750226993535 on volume /hadoop-3.3.1/target/test/data/dfs/data/data2...
2025-06-18 03:09:55,191 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-1824230658-172.17.0.2-1750226993535 on /hadoop-3.3.1/target/test/data/dfs/data/data1: 39ms
2025-06-18 03:09:55,193 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-1824230658-172.17.0.2-1750226993535 on /hadoop-3.3.1/target/test/data/dfs/data/data2: 40ms
2025-06-18 03:09:55,193 INFO impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1824230658-172.17.0.2-1750226993535: 41ms
2025-06-18 03:09:55,194 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-1824230658-172.17.0.2-1750226993535 on volume /hadoop-3.3.1/target/test/data/dfs/data/data1...
2025-06-18 03:09:55,194 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-1824230658-172.17.0.2-1750226993535 on volume /hadoop-3.3.1/target/test/data/dfs/data/data2...
2025-06-18 03:09:55,194 INFO impl.BlockPoolSlice: Replica Cache file: /hadoop-3.3.1/target/test/data/dfs/data/data1/current/BP-1824230658-172.17.0.2-1750226993535/current/replicas doesn't exist
2025-06-18 03:09:55,194 INFO impl.BlockPoolSlice: Replica Cache file: /hadoop-3.3.1/target/test/data/dfs/data/data2/current/BP-1824230658-172.17.0.2-1750226993535/current/replicas doesn't exist
2025-06-18 03:09:55,196 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1824230658-172.17.0.2-1750226993535 on volume /hadoop-3.3.1/target/test/data/dfs/data/data1: 3ms
2025-06-18 03:09:55,197 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1824230658-172.17.0.2-1750226993535 on volume /hadoop-3.3.1/target/test/data/dfs/data/data2: 2ms
2025-06-18 03:09:55,197 INFO impl.FsDatasetImpl: Total time to add all replicas to map for block pool BP-1824230658-172.17.0.2-1750226993535: 3ms
2025-06-18 03:09:55,197 INFO checker.ThrottledAsyncChecker: Scheduling a check for /hadoop-3.3.1/target/test/data/dfs/data/data1
2025-06-18 03:09:55,199 INFO hdfs.MiniDFSCluster: dnInfo.length != numDataNodes
2025-06-18 03:09:55,199 INFO hdfs.MiniDFSCluster: Waiting for cluster to become active
2025-06-18 03:09:55,206 INFO checker.DatasetVolumeChecker: Scheduled health check for volume /hadoop-3.3.1/target/test/data/dfs/data/data1
2025-06-18 03:09:55,208 INFO checker.ThrottledAsyncChecker: Scheduling a check for /hadoop-3.3.1/target/test/data/dfs/data/data2
2025-06-18 03:09:55,208 INFO checker.DatasetVolumeChecker: Scheduled health check for volume /hadoop-3.3.1/target/test/data/dfs/data/data2
2025-06-18 03:09:55,210 INFO datanode.VolumeScanner: Now scanning bpid BP-1824230658-172.17.0.2-1750226993535 on volume /hadoop-3.3.1/target/test/data/dfs/data/data1
2025-06-18 03:09:55,210 INFO datanode.VolumeScanner: Now scanning bpid BP-1824230658-172.17.0.2-1750226993535 on volume /hadoop-3.3.1/target/test/data/dfs/data/data2
2025-06-18 03:09:55,212 INFO datanode.VolumeScanner: VolumeScanner(/hadoop-3.3.1/target/test/data/dfs/data/data2, DS-e066f37b-b531-4ee0-bee5-fcd49b2a1988): finished scanning block pool BP-1824230658-172.17.0.2-1750226993535
2025-06-18 03:09:55,212 INFO datanode.VolumeScanner: VolumeScanner(/hadoop-3.3.1/target/test/data/dfs/data/data1, DS-646bdffe-07a9-484d-b454-006de0117db0): finished scanning block pool BP-1824230658-172.17.0.2-1750226993535
2025-06-18 03:09:55,212 WARN datanode.DirectoryScanner: dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value above 1000 ms/sec. Assuming default value of -1
2025-06-18 03:09:55,213 INFO datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting in 10076329ms with interval of 21600000ms and throttle limit of -1ms/s
2025-06-18 03:09:55,220 INFO datanode.DataNode: Block pool BP-1824230658-172.17.0.2-1750226993535 (Datanode Uuid cd7a2e6c-0178-49c9-846a-e8b940ae9691) service to localhost/127.0.0.1:12222 beginning handshake with NN
2025-06-18 03:09:55,228 INFO datanode.VolumeScanner: VolumeScanner(/hadoop-3.3.1/target/test/data/dfs/data/data1, DS-646bdffe-07a9-484d-b454-006de0117db0): no suitable block pools found to scan. Waiting 1814399982 ms.
2025-06-18 03:09:55,228 INFO datanode.VolumeScanner: VolumeScanner(/hadoop-3.3.1/target/test/data/dfs/data/data2, DS-e066f37b-b531-4ee0-bee5-fcd49b2a1988): no suitable block pools found to scan. Waiting 1814399982 ms.
2025-06-18 03:09:55,239 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:35563, datanodeUuid=cd7a2e6c-0178-49c9-846a-e8b940ae9691, infoPort=37863, infoSecurePort=0, ipcPort=34819, storageInfo=lv=-57;cid=testClusterID;nsid=2133025505;c=1750226993535) storage cd7a2e6c-0178-49c9-846a-e8b940ae9691
2025-06-18 03:09:55,240 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:35563
2025-06-18 03:09:55,241 INFO blockmanagement.BlockReportLeaseManager: Registered DN cd7a2e6c-0178-49c9-846a-e8b940ae9691 (127.0.0.1:35563).
2025-06-18 03:09:55,245 INFO datanode.DataNode: Block pool BP-1824230658-172.17.0.2-1750226993535 (Datanode Uuid cd7a2e6c-0178-49c9-846a-e8b940ae9691) service to localhost/127.0.0.1:12222 successfully registered with NN
2025-06-18 03:09:55,245 INFO datanode.DataNode: For namenode localhost/127.0.0.1:12222 using BLOCKREPORT_INTERVAL of 21600000msecs CACHEREPORT_INTERVAL of 10000msecs Initial delay: 0msecs; heartBeatInterval=3000
2025-06-18 03:09:55,260 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-646bdffe-07a9-484d-b454-006de0117db0 for DN 127.0.0.1:35563
2025-06-18 03:09:55,261 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-e066f37b-b531-4ee0-bee5-fcd49b2a1988 for DN 127.0.0.1:35563
2025-06-18 03:09:55,287 INFO BlockStateChange: BLOCK* processReport 0x77054b4d7060287f: Processing first storage report for DS-e066f37b-b531-4ee0-bee5-fcd49b2a1988 from datanode cd7a2e6c-0178-49c9-846a-e8b940ae9691
2025-06-18 03:09:55,289 INFO BlockStateChange: BLOCK* processReport 0x77054b4d7060287f: from storage DS-e066f37b-b531-4ee0-bee5-fcd49b2a1988 node DatanodeRegistration(127.0.0.1:35563, datanodeUuid=cd7a2e6c-0178-49c9-846a-e8b940ae9691, infoPort=37863, infoSecurePort=0, ipcPort=34819, storageInfo=lv=-57;cid=testClusterID;nsid=2133025505;c=1750226993535), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2025-06-18 03:09:55,289 INFO BlockStateChange: BLOCK* processReport 0x77054b4d7060287f: Processing first storage report for DS-646bdffe-07a9-484d-b454-006de0117db0 from datanode cd7a2e6c-0178-49c9-846a-e8b940ae9691
2025-06-18 03:09:55,289 INFO BlockStateChange: BLOCK* processReport 0x77054b4d7060287f: from storage DS-646bdffe-07a9-484d-b454-006de0117db0 node DatanodeRegistration(127.0.0.1:35563, datanodeUuid=cd7a2e6c-0178-49c9-846a-e8b940ae9691, infoPort=37863, infoSecurePort=0, ipcPort=34819, storageInfo=lv=-57;cid=testClusterID;nsid=2133025505;c=1750226993535), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2025-06-18 03:09:55,300 INFO datanode.DataNode: Successfully sent block report 0x77054b4d7060287f to namenode: localhost/127.0.0.1:12222, containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 4 msecs to generate and 24 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2025-06-18 03:09:55,301 INFO datanode.DataNode: Got finalize command for block pool BP-1824230658-172.17.0.2-1750226993535
2025-06-18 03:09:55,309 INFO hdfs.MiniDFSCluster: Cluster is active
2025-06-18 03:09:55,312 INFO mapreduce.MiniHadoopClusterManager: Started MiniDFSCluster -- namenode on port 12222
2025-06-18 03:12:22,517 INFO namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 13 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 1 0
2025-06-18 03:12:22,535 INFO hdfs.StateChange: BLOCK* allocate blk_1073741825_1001, replicas=127.0.0.1:35563 for /02845_table_function_hdfs_filter_by_virtual_columns_test_8te4booz.data1.tsv
2025-06-18 03:12:22,580 INFO datanode.DataNode: Receiving BP-1824230658-172.17.0.2-1750226993535:blk_1073741825_1001 src: /127.0.0.1:56078 dest: /127.0.0.1:35563
2025-06-18 03:12:22,602 INFO DataNode.clienttrace: src: /127.0.0.1:56078, dest: /127.0.0.1:35563, bytes: 2, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.441668_count_2_pid_616_tid_140571747907136, offset: 0, srvID: cd7a2e6c-0178-49c9-846a-e8b940ae9691, blockid: BP-1824230658-172.17.0.2-1750226993535:blk_1073741825_1001, duration(ns): 6244083
2025-06-18 03:12:22,602 INFO datanode.DataNode: PacketResponder: BP-1824230658-172.17.0.2-1750226993535:blk_1073741825_1001, type=LAST_IN_PIPELINE terminating
2025-06-18 03:12:22,604 INFO hdfs.StateChange: BLOCK* fsync: /02845_table_function_hdfs_filter_by_virtual_columns_test_8te4booz.data1.tsv for libhdfs3_client_rand_0.441668_count_2_pid_616_tid_140571747907136
2025-06-18 03:12:22,608 INFO namenode.FSNamesystem: BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /02845_table_function_hdfs_filter_by_virtual_columns_test_8te4booz.data1.tsv
2025-06-18 03:12:23,012 INFO hdfs.StateChange: DIR* completeFile: /02845_table_function_hdfs_filter_by_virtual_columns_test_8te4booz.data1.tsv is closed by libhdfs3_client_rand_0.441668_count_2_pid_616_tid_140571747907136
2025-06-18 03:12:23,378 INFO hdfs.StateChange: BLOCK* allocate blk_1073741826_1002, replicas=127.0.0.1:35563 for /02845_table_function_hdfs_filter_by_virtual_columns_test_8te4booz.data2.tsv
2025-06-18 03:12:23,381 INFO datanode.DataNode: Receiving BP-1824230658-172.17.0.2-1750226993535:blk_1073741826_1002 src: /127.0.0.1:56094 dest: /127.0.0.1:35563
2025-06-18 03:12:23,386 INFO DataNode.clienttrace: src: /127.0.0.1:56094, dest: /127.0.0.1:35563, bytes: 2, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.526700_count_4_pid_616_tid_140571640227392, offset: 0, srvID: cd7a2e6c-0178-49c9-846a-e8b940ae9691, blockid: BP-1824230658-172.17.0.2-1750226993535:blk_1073741826_1002, duration(ns): 3447191
2025-06-18 03:12:23,386 INFO datanode.DataNode: PacketResponder: BP-1824230658-172.17.0.2-1750226993535:blk_1073741826_1002, type=LAST_IN_PIPELINE terminating
2025-06-18 03:12:23,387 INFO hdfs.StateChange: BLOCK* fsync: /02845_table_function_hdfs_filter_by_virtual_columns_test_8te4booz.data2.tsv for libhdfs3_client_rand_0.526700_count_4_pid_616_tid_140571640227392
2025-06-18 03:12:23,389 INFO hdfs.StateChange: DIR* completeFile: /02845_table_function_hdfs_filter_by_virtual_columns_test_8te4booz.data2.tsv is closed by libhdfs3_client_rand_0.526700_count_4_pid_616_tid_140571640227392
2025-06-18 03:12:23,803 INFO hdfs.StateChange: BLOCK* allocate blk_1073741827_1003, replicas=127.0.0.1:35563 for /02845_table_function_hdfs_filter_by_virtual_columns_test_8te4booz.data3.tsv
2025-06-18 03:12:23,805 INFO datanode.DataNode: Receiving BP-1824230658-172.17.0.2-1750226993535:blk_1073741827_1003 src: /127.0.0.1:56104 dest: /127.0.0.1:35563
2025-06-18 03:12:23,809 INFO DataNode.clienttrace: src: /127.0.0.1:56104, dest: /127.0.0.1:35563, bytes: 2, op: HDFS_WRITE, cliID: libhdfs3_client_rand_0.526700_count_6_pid_616_tid_140571640227392, offset: 0, srvID: cd7a2e6c-0178-49c9-846a-e8b940ae9691, blockid: BP-1824230658-172.17.0.2-1750226993535:blk_1073741827_1003, duration(ns): 2425266
2025-06-18 03:12:23,809 INFO datanode.DataNode: PacketResponder: BP-1824230658-172.17.0.2-1750226993535:blk_1073741827_1003, type=LAST_IN_PIPELINE terminating
2025-06-18 03:12:23,809 INFO hdfs.StateChange: BLOCK* fsync: /02845_table_function_hdfs_filter_by_virtual_columns_test_8te4booz.data3.tsv for libhdfs3_client_rand_0.526700_count_6_pid_616_tid_140571640227392
2025-06-18 03:12:23,810 INFO hdfs.StateChange: DIR* completeFile: /02845_table_function_hdfs_filter_by_virtual_columns_test_8te4booz.data3.tsv is closed by libhdfs3_client_rand_0.526700_count_6_pid_616_tid_140571640227392
2025-06-18 03:18:31,270 INFO BlockStateChange: BLOCK* processReport 0x77054b4d70602880: from storage DS-e066f37b-b531-4ee0-bee5-fcd49b2a1988 node DatanodeRegistration(127.0.0.1:35563, datanodeUuid=cd7a2e6c-0178-49c9-846a-e8b940ae9691, infoPort=37863, infoSecurePort=0, ipcPort=34819, storageInfo=lv=-57;cid=testClusterID;nsid=2133025505;c=1750226993535), blocks: 1, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2025-06-18 03:18:31,270 INFO BlockStateChange: BLOCK* processReport 0x77054b4d70602880: from storage DS-646bdffe-07a9-484d-b454-006de0117db0 node DatanodeRegistration(127.0.0.1:35563, datanodeUuid=cd7a2e6c-0178-49c9-846a-e8b940ae9691, infoPort=37863, infoSecurePort=0, ipcPort=34819, storageInfo=lv=-57;cid=testClusterID;nsid=2133025505;c=1750226993535), blocks: 2, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2025-06-18 03:18:31,271 INFO datanode.DataNode: Successfully sent block report 0x77054b4d70602880 to namenode: localhost/127.0.0.1:12222, containing 2 storage report(s), of which we sent 2. The reports had 3 total blocks and used 1 RPC(s). This took 1 msecs to generate and 2 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2025-06-18 03:18:31,271 INFO datanode.DataNode: Got finalize command for block pool BP-1824230658-172.17.0.2-1750226993535
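Note: the log above was produced by MiniHadoopClusterManager starting a single-NameNode, single-DataNode MiniDFSCluster on port 12222; the exact command line is not shown in the log. As a rough, hedged sketch only (not the actual test harness invocation), an equivalent cluster can be brought up programmatically with the MiniDFSCluster test API from the hadoop-hdfs test artifact; the port, base directory, and node counts below are taken from the log, everything else is an assumption.

// Hypothetical sketch, assuming Hadoop 3.3.x test jars on the classpath.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniClusterSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Base directory for NameNode/DataNode storage; the log uses
        // /hadoop-3.3.1/target/test/data (name-0-1, name-0-2, data1, data2).
        conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, "/hadoop-3.3.1/target/test/data");

        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
                .nameNodePort(12222)   // "fs.defaultFS is hdfs://127.0.0.1:12222"
                .numDataNodes(1)       // "numNameNodes=1, numDataNodes=1"
                .format(true)          // "Formatting using clusterid: testClusterID"
                .build();
        cluster.waitActive();          // corresponds to "Cluster is active"

        // A client can then create files, as the libhdfs3 client does later in the log.
        FileSystem fs = cluster.getFileSystem();
        fs.create(new Path("/example.tsv")).close();

        cluster.shutdown();
    }
}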